The word "thinking" itself is human centric, and the process by which a machine infers and synthesizes may be entirely different from how a human, biological, wetware brain/body system does it. Just consider the degree to which bodily needs and survival shapes our thinking. I recon a machine would be a lot "flatter", i.e. not have our daily cycles based on sleep, 90minute cycles of more and less focus (ref. Huberman) or monthly hormonal cycles that influence our thinking, or puberty etc. Etc.
This thought was spurred by the quote [[q.The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim.]] by Edsger Dijkstra in "The threats to computing science". Granted, that was written in 1984, and what machines could do back then, or were envisioned to do in the future, may not have been as close to the imitation neural networks we build today. But I still believe that, since [[every psychological event has a biological correllate]], the "psychology" of a machine intelligence (or machine intelligences, in case of a whole class of distinct types) might be so different from ours that we might not recognise it as thinking, or conscious, at all.
Until now we have defined consciousness as one thing, with levels along a single axis: less and more. A little conscious like an ant, quite conscious like a human, very conscious like a Buddha, super conscious like a god. Maybe we need to introduce another axis, or think of consciousness as a spectrum, where a spectrum carries wavelengths of different amplitudes. Think colours and flavours.
Via:: [[q.The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim.]] Topic:: [[AI]] related:: [[../Notes/Consciousness]]